latent trace norm
Export Reviews, Discussions, Author Feedback and Meta-Reviews
"NIPS Neural Information Processing Systems 8-11th December 2014, Montreal, Canada",,, "Paper ID:","1466" "Title:","Multitask learning meets tensor factorization: task imputation via convex optimization" Current Reviews First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. In this paper, the authors study the problem of learning a tensor for the purpose of linear multi-task learning. The authors propose a new weighted version of a previously proposed tensor norm (called latent trace norm) and show that the introduced rescaling yields better bounds on the excess risk as well as improved recovery performance on some datasets. The paper is well written and organized, and the proposed rescaling can potentially have a significant impact in practice, although a more extensive experimental evaluation would have been desirable. The technical results seem to be appropriate and correctly proven.
Multitask learning meets tensor factorization: task imputation via convex optimization
Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of information shared among tasks. Two types of convex relaxations of the tensor multilinear rank have recently been proposed. However, we argue that neither of them is optimal in the context of multitask learning, where the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings, including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.
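As a point of orientation for the norms discussed in this abstract, here is a minimal sketch (assumed illustration, not the authors' code) of the mode-k unfolding and the overlapped trace norm, the baseline relaxation these papers compare against. The latent and scaled latent trace norms are defined by an optimization over decompositions W = Σ_k W^(k) (the scaled variant weighting mode k by 1/√n_k), so they are not shown as closed-form computations here.

```python
import numpy as np

def unfold(W, mode):
    """Mode-k unfolding W_(k): bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(W, mode, 0).reshape(W.shape[mode], -1)

def overlapped_trace_norm(W):
    """Sum over modes of the nuclear (trace) norm of each unfolding: sum_k ||W_(k)||_tr."""
    return sum(np.linalg.norm(unfold(W, k), "nuc") for k in range(W.ndim))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 5, 6))   # a heterogeneously sized 3-way tensor
print(overlapped_trace_norm(W))
```

The overlapped norm penalizes every unfolding equally, which is exactly the behavior the abstract argues is suboptimal when the modes have very different sizes or ranks.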
A Dual Framework for Low-rank Tensor Completion
Nimishakavi, Madhav, Jawanpuria, Pratik Kumar, Mishra, Bamdev
One of the popular approaches for low-rank tensor completion is to use latent trace norm regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem. We first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
A dual framework for trace norm regularized low-rank tensor completion
Nimishakavi, Madhav, Jawanpuria, Pratik, Mishra, Bamdev
One of the popular approaches for low-rank tensor completion is to use the latent trace norm as a low-rank regularizer. However, most of the existing works learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm which helps to learn a non-sparse combination of tensors. We develop a dual framework for solving the problem of latent trace norm regularized low-rank tensor completion. In this framework, we first show a novel characterization of the solution space with a novel factorization, and then, propose two scalable optimization formulations. The problems are shown to lie on a Cartesian product of Riemannian spectrahedron manifolds. We exploit the versatile Riemannian optimization framework for proposing computationally efficient trust-region algorithms. The experiments show the good performance of the proposed algorithms on several real-world data sets in different applications.
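For context on what "trace norm regularized" means computationally, the following is a hedged sketch of singular-value thresholding (SVT), the proximal operator of the nuclear norm that appears as a standard building block in many trace norm completion solvers. It is shown only for orientation; the paper's own solver is a Riemannian trust-region method on spectrahedron manifolds, not SVT.

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * ||.||_nuc: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applying `svt` to an unfolding of a tensor shrinks its singular values toward zero, which is how nuclear norm penalties induce low rank in proximal-style completion algorithms.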
Convex Coupled Matrix and Tensor Completion
Wimalawarne, Kishan, Yamada, Makoto, Mamitsuka, Hiroshi
We propose a set of convex low-rank-inducing norms for coupled matrices and tensors (hereafter, coupled tensors), which share information between matrices and tensors through common modes. More specifically, we propose a mixture of the overlapped trace norm and the latent norms with the matrix trace norm, and then propose a new completion algorithm based on the proposed norms. A key advantage of the proposed norms is that they are convex, so a globally optimal solution can be found, while existing methods for coupled learning are non-convex. Furthermore, we analyze the excess risk bounds of the completion model regularized by our proposed norms, which show that our proposed norms can exploit the low-rankness of coupled tensors, leading to better bounds than uncoupled norms. Through synthetic and real-world data experiments, we show that the proposed completion algorithm compares favorably with existing completion algorithms.
Theoretical and Experimental Analyses of Tensor-Based Regression and Classification
Wimalawarne, Kishan, Tomioka, Ryota, Sugiyama, Masashi
We theoretically and experimentally investigate tensor-based regression and classification. Our focus is regularization with various tensor norms, including the overlapped trace norm, the latent trace norm, and the scaled latent trace norm. We first give dual optimization methods using the alternating direction method of multipliers, which is computationally efficient when the number of training samples is moderate. We then theoretically derive an excess risk bound for each tensor norm and clarify their behavior. Finally, we perform extensive experiments using simulated and real data and demonstrate the superiority of tensor-based learning methods over vector- and matrix-based learning methods.
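The tensor-based linear model analyzed in this line of work predicts via an inner product between a weight tensor and an input tensor; the tensor norms above then act as the regularizer. A minimal sketch of that prediction step (an assumed illustration, not the paper's code):

```python
import numpy as np

def predict(W, X):
    """Tensor linear model: the inner product <W, X> of two same-shaped tensors."""
    return float(np.tensordot(W, X, axes=W.ndim))

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4, 5))   # weight tensor
X = rng.standard_normal((3, 4, 5))   # one input sample
y_hat = predict(W, X)
```

Training then minimizes an empirical loss over samples plus one of the tensor norms on W, which is where the overlapped, latent, and scaled latent trace norms differ.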